Automatic Aorta Segmentation with Heavily Augmented, High-Resolution 3-D ResUNet: Contribution to the SEG.A Challenge
Automatic aorta segmentation from 3-D medical volumes is an important yet
difficult task. Several factors make the problem challenging, e.g., the
possibility of aortic dissection or the difficulty of segmenting and
annotating the small branches. This work presents a contribution by the MedGIFT
team to the SEG.A challenge organized during the MICCAI 2023 conference. We
propose a fully automated algorithm based on deep encoder-decoder architecture.
The main assumption behind our work is that data preprocessing and augmentation
are much more important than the deep architecture, especially in low data
regimes. Therefore, the solution is based on a variant of the traditional
convolutional U-Net. The proposed solution achieved a Dice score above 0.9 for
all testing cases with the highest stability among all participants. The method
scored 1st, 4th, and 3rd in terms of the clinical evaluation, quantitative
results, and volumetric meshing quality, respectively. We freely release the
source code, pretrained model, and provide access to the algorithm on the
Grand-Challenge platform.
Comment: MICCAI 2023 - SEG.A Challenge Contribution
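The abstract's central claim, that heavy data preprocessing and augmentation matter more than the network architecture in low-data regimes, can be illustrated with a minimal NumPy sketch. The transforms below (flips, axial rotations, intensity jitter) are illustrative assumptions, not the actual SEG.A pipeline:

```python
import numpy as np

def augment_volume(volume, rng):
    """Simple heavy augmentation for a 3-D volume of shape (D, H, W).
    A sketch of the kind of transforms the abstract emphasizes; the
    published pipeline is not reproduced here."""
    out = volume.copy()
    # Random flips along each spatial axis.
    for axis in range(3):
        if rng.random() < 0.5:
            out = np.flip(out, axis=axis)
    # Random 90-degree rotation in the axial plane (H, W must be equal).
    out = np.rot90(out, k=int(rng.integers(0, 4)), axes=(1, 2))
    # Multiplicative intensity jitter plus additive Gaussian noise.
    out = out * rng.uniform(0.9, 1.1) + rng.normal(0.0, 0.01, size=out.shape)
    return out.astype(np.float32)

rng = np.random.default_rng(0)
vol = np.zeros((8, 16, 16), dtype=np.float32)
aug = augment_volume(vol, rng)
```

Each call produces a differently transformed copy, so the effective training set grows far beyond the raw number of annotated volumes.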
High-Resolution Cranial Defect Reconstruction by Iterative, Low-Resolution, Point Cloud Completion Transformers
Each year thousands of people suffer from various types of cranial injuries
and require personalized implants whose manual design is expensive and
time-consuming. Therefore, an automatic, dedicated system to increase the
availability of personalized cranial reconstruction is highly desirable. The
problem of the automatic cranial defect reconstruction can be formulated as the
shape completion task and solved using dedicated deep networks. Currently, the
most common approach is to use the volumetric representation and apply deep
networks dedicated to image segmentation. However, this approach has several
limitations: it does not scale well to high-resolution volumes, nor does it
take data sparsity into account. In our work, we reformulate the problem as a
point cloud completion task. We propose an iterative, transformer-based method
to reconstruct the cranial defect at any resolution while also being fast and
resource-efficient during training and inference. We compare the proposed
method to state-of-the-art volumetric approaches and show superior
performance in terms of GPU memory consumption while maintaining high quality
of the reconstructed defects.
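The reformulation the abstract describes, from a dense volumetric grid to a sparse point cloud, can be sketched in a few lines of NumPy. The normalization step is an assumption about how resolution independence might be achieved, not the paper's exact preprocessing:

```python
import numpy as np

def volume_to_point_cloud(binary_volume):
    """Convert a binary occupancy volume into an (N, 3) point cloud of
    occupied voxel coordinates. Coordinates are normalised to [0, 1) so
    the representation is independent of the voxel resolution."""
    points = np.argwhere(binary_volume > 0).astype(np.float32)
    points /= np.array(binary_volume.shape, dtype=np.float32)
    return points

# A small occupied region inside a 64^3 grid:
vol = np.zeros((64, 64, 64), dtype=np.uint8)
vol[30:34, 30:34, 30:34] = 1
cloud = volume_to_point_cloud(vol)
# 64 occupied points represent what the dense grid stores in 262,144
# voxels, which is the sparsity argument made in the abstract.
```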
Development of an artificial intelligence-based method for the diagnosis of the severity of myxomatous mitral valve disease from canine chest radiographs
An algorithm based on artificial intelligence (AI) was developed and tested to classify different stages of myxomatous mitral valve disease (MMVD) from canine thoracic radiographs. The radiographs were selected from the medical databases of two different institutions, considering dogs over 6 years of age that had undergone chest X-ray and echocardiographic examination. Only radiographs clearly showing the cardiac silhouette were considered. The convolutional neural network (CNN) was trained on both the right and left lateral and/or ventro-dorsal or dorso-ventral views. Each dog was classified according to the American College of Veterinary Internal Medicine (ACVIM) guidelines as stage B1, B2 or C + D. ResNet18 CNN was used as a classification network, and the results were evaluated using confusion matrices, receiver operating characteristic curves, and t-SNE and UMAP projections. The area under the curve (AUC) showed good heart-CNN performance in determining the MMVD stage from the lateral views, with an AUC of 0.87, 0.77, and 0.88 for stages B1, B2, and C + D, respectively. The high accuracy of the algorithm in predicting the MMVD stage suggests that it could serve as a useful support tool in the interpretation of canine thoracic radiographs.
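The per-stage AUC values reported above are computed one-vs-rest. The metric itself has a simple rank-based interpretation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A toy sketch (purely illustrative, unrelated to the study's actual scores):

```python
def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) formulation: the probability
    that a random positive outscores a random negative, with ties
    counted as half. Illustrative only."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly separated scores give AUC = 1.0:
auc = roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.4])
```

For a three-class problem such as B1 / B2 / C + D, the same computation is repeated once per stage, treating that stage as the positive class.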
The ACROBAT 2022 Challenge: Automatic Registration Of Breast Cancer Tissue
The alignment of tissue between histopathological whole-slide-images (WSI) is
crucial for research and clinical applications. Advances in computing, deep
learning, and availability of large WSI datasets have revolutionised WSI
analysis. Therefore, the current state-of-the-art in WSI registration is
unclear. To address this, we conducted the ACROBAT challenge, based on the
largest WSI registration dataset to date, including 4,212 WSIs from 1,152
breast cancer patients. The challenge objective was to align WSIs of tissue
that was stained with routine diagnostic immunohistochemistry to its
H&E-stained counterpart. We compare the performance of eight WSI registration
algorithms, including an investigation of the impact of different WSI
properties and clinical covariates. We find that conceptually distinct WSI
registration methods can lead to highly accurate registration performances and
identify covariates that impact performances across methods. These results
establish the current state-of-the-art in WSI registration and guide
researchers in selecting and developing methods.
Learn2Reg: comprehensive multi-task medical image registration challenge, dataset and evaluation in the era of deep learning
Image registration is a fundamental medical image analysis task, and a wide
variety of approaches have been proposed. However, only a few studies have
comprehensively compared medical image registration approaches on a wide range
of clinically relevant tasks. This limits the development of registration
methods, the adoption of research advances into practice, and a fair benchmark
across competing approaches. The Learn2Reg challenge addresses these
limitations by providing a multi-task medical image registration data set for
comprehensive characterisation of deformable registration algorithms. A
continuous evaluation will be possible at
https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of
anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR),
availability of annotations, as well as intra- and inter-patient registration
evaluation. We established an easily accessible framework for training and
validation of 3D registration methods, which enabled the compilation of results
of over 65 individual method submissions from more than 20 unique teams. We
used a complementary set of metrics, including robustness, accuracy,
plausibility, and runtime, enabling unique insight into the current
state-of-the-art of medical image registration. This paper describes datasets,
tasks, evaluation methods and results of the challenge, as well as results of
further analysis of transferability to new datasets, the importance of label
supervision, and resulting bias. While no single approach worked best across
all tasks, many methodological aspects could be identified that push the
performance of medical image registration to new state-of-the-art performance.
Furthermore, we dispelled the common belief that conventional registration
methods must be much slower than deep-learning-based ones.
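Among the complementary metrics mentioned above, plausibility is often assessed through the Jacobian determinant of the deformation: negative determinants flag folding. A minimal 2-D NumPy sketch of that check (exact metric definitions in the challenge may differ):

```python
import numpy as np

def jacobian_determinant_2d(disp):
    """Per-pixel Jacobian determinant of a 2-D displacement field
    disp of shape (H, W, 2). The transform is phi(x) = x + u(x),
    so J = I + grad(u); det(J) <= 0 indicates folding."""
    dux_dy, dux_dx = np.gradient(disp[..., 0])
    duy_dy, duy_dx = np.gradient(disp[..., 1])
    return (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx

# The identity transform (zero displacement) has det(J) = 1 everywhere:
det = jacobian_determinant_2d(np.zeros((8, 8, 2)))
```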
InvNet: a deep learning approach to invert complex deformation fields
Inverting a deformation field is a crucial step in numerous image registration methods and has an important impact on the final registration results. There are methods that work well for small and relatively simple deformations. However, a problem arises when the deformation field consists of complex and large deformations, potentially including folding. For such cases, the state-of-the-art methods fail and the inversion results are unpredictable. In this article, we propose a deep network using the encoder-decoder architecture to improve the inverse calculation. The network is trained using deformations randomly generated using various transformation models and their compositions, with a symmetric inverse consistency error as the cost function. The results are validated using synthetic deformations resembling real ones, as well as deformation fields calculated during registration of real histology data. We show that the proposed method provides an approximate inverse with a lower error than the current state-of-the-art methods.
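The classical iterative baseline such learning-based inverters are compared against is fixed-point iteration: repeatedly set v(x) = -u(x + v(x)). A 1-D NumPy sketch (not the paper's 3-D implementation) also shows the symmetric inverse consistency error used as the cost:

```python
import numpy as np

def invert_displacement_1d(u, n_iter=50):
    """Fixed-point inversion of a 1-D displacement field u sampled on an
    integer grid: iterate v <- -u(x + v), sampling u by linear
    interpolation. Converges for smooth fields with |u'| < 1."""
    x = np.arange(len(u), dtype=np.float64)
    v = np.zeros_like(u)
    for _ in range(n_iter):
        v = -np.interp(x + v, x, u)
    return v

u = 2.0 * np.sin(np.linspace(0.0, np.pi, 64))   # smooth forward field
v = invert_displacement_1d(u)
x = np.arange(64, dtype=np.float64)
# Inverse consistency error: composing forward and inverse displacements
# should return every point to itself.
residual = np.abs(np.interp(x + v, x, u) + v).max()
```

For the large, possibly folding fields the abstract targets, this iteration diverges, which is exactly the failure mode motivating a learned inverse.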
DeepHistReg: an unsupervised deep learning registration framework for differently stained histology samples
Background and objective: The use of several stains during histology sample preparation can be useful for fusing complementary information about different tissue structures. It reveals distinct tissue properties that, combined, may be useful for grading, classification, or 3-D reconstruction. Nevertheless, since the slide preparation is different for each stain and the procedure uses consecutive slices, the tissue undergoes complex and possibly large deformations. Therefore, a nonrigid registration is required before further processing. The nonrigid registration of differently stained histology images is a challenging task because: (i) the registration must be fully automatic, (ii) the histology images are extremely high-resolution, (iii) the registration should be as fast as possible, (iv) there are significant differences in the tissue appearance, and (v) there are not many unique features due to a repetitive texture.
Methods: In this article, we propose a deep learning-based solution to histology registration. We describe a registration framework dedicated to high-resolution histology images that can perform the registration in real-time. The framework consists of an automatic background segmentation, iterative initial rotation search, and learning-based affine/nonrigid registration.
Results: We evaluate our approach using an open dataset provided for the Automatic Non-rigid Histological Image Registration (ANHIR) challenge organized jointly with the IEEE ISBI 2019 conference. We compare our solution to the challenge participants using a server-side evaluation tool provided by the challenge organizers. Following the challenge evaluation criteria, we use the target registration error (TRE) as the evaluation metric. Our algorithm provides registration accuracy close to the best scoring teams (median rTRE 0.19% of the image diagonal) while being significantly faster (the average registration time is about 2 seconds).
Conclusions: The proposed framework provides results, in terms of the TRE, comparable to the best-performing state-of-the-art methods. However, it is significantly faster, thus potentially more useful in clinical practice where a large number of histology images are being processed. The proposed method is of particular interest to researchers requiring an accurate, real-time, nonrigid registration of high-resolution histology images for whom the processing time of traditional, iterative methods is unacceptable. We provide free access to the software implementation of the method, including training and inference code, as well as pretrained models. Since the ANHIR dataset is open, this makes the results fully and easily reproducible.
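The "iterative initial rotation search" step can be sketched as an exhaustive search over candidate rotations, scored by a similarity metric. The sketch below restricts candidates to 90-degree steps and uses normalised cross-correlation; the actual framework searches a denser angle set with learned refinement afterwards:

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equally sized images."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def initial_rotation_search(moving, fixed):
    """Try each candidate rotation of the moving image and keep the one
    most similar to the fixed image (90-degree steps for brevity)."""
    best_k, best_score = 0, -np.inf
    for k in range(4):
        score = ncc(np.rot90(moving, k), fixed)
        if score > best_score:
            best_k, best_score = k, score
    return best_k

fixed = np.zeros((32, 32))
fixed[4:12, 10:22] = 1.0            # toy asymmetric structure
moving = np.rot90(fixed, 2)         # moving image rotated by 180 degrees
k = initial_rotation_search(moving, fixed)
```

A good initial rotation keeps the subsequent affine/nonrigid stages within their capture range, which is why it is a separate step in the framework.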
Learning-based affine registration of histological images
The use of different stains for histological sample preparation reveals distinct tissue properties and may result in a more accurate diagnosis. However, as a result of the staining process, the tissue slides become deformed, and registration is required before further processing. The importance of this problem led to an open challenge named the Automatic Non-rigid Histological Image Registration (ANHIR) challenge, organized jointly with the IEEE ISBI 2019 conference. The challenge organizers provided several hundred image pairs and a server-side evaluation platform. One of the most difficult sub-problems for the challenge participants was to find an initial, global transform before attempting to calculate the final, non-rigid deformation field. This article solves the problem by proposing a deep network trained in an unsupervised way with good generalization. We propose a method that works well for images with different resolutions and aspect ratios, without the need for image padding, while maintaining a low number of network parameters and a fast forward pass time. The proposed method is orders of magnitude faster than the classical approach based on iterative similarity metric optimization or computer vision descriptors. The success rate is above 98% for both the training set and the evaluation set. We make both the training and inference code freely available.
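One common way to make a predicted affine transform independent of image resolution and aspect ratio, as the abstract requires, is to express it in normalised [-1, 1] coordinates. The sketch below shows that idea; it is an assumption about a plausible parameterisation, not the paper's exact one:

```python
import numpy as np

def apply_affine_normalized(points_px, affine, size):
    """Apply a 2x3 affine, defined in normalised [-1, 1] coordinates,
    to (x, y) pixel landmarks of an image of shape size = (H, W)."""
    h, w = size
    scale = np.array([w - 1, h - 1], dtype=np.float64)
    norm = 2.0 * points_px / scale - 1.0          # pixels -> [-1, 1]
    out = norm @ affine[:, :2].T + affine[:, 2]   # homogeneous affine
    return (out + 1.0) / 2.0 * scale              # [-1, 1] -> pixels

identity = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
pts = np.array([[10.0, 20.0], [100.0, 50.0]])
# The identity affine maps every landmark to itself, at any image size:
same = apply_affine_normalized(pts, identity, size=(128, 256))
```

Because the transform never references pixel units directly, the same network output is valid for images of any resolution, with no padding required.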
Contact-Free Multispectral Identity Verification System Using Palm Veins and Deep Neural Network
Devices and systems secured by biometric factors have become part of our lives because they are convenient, easy to use, reliable, and secure. They use information about unique features of our bodies in order to authenticate a user. It is possible to enhance the security of these devices by adding a supplementary modality while keeping the user experience at the same level. Palm vein systems are based on infrared wavelengths used for capturing images of users' veins. It is both convenient for the user and one of the most secure biometric solutions. The proposed system uses IR and UV wavelengths; the images are then processed by a deep convolutional neural network for extraction of biometric features and authentication of users. We tested the system in a verification scenario that consisted of checking if the images collected from the user contained the same biometric features as those in the database. The True Positive Rate (TPR) achieved by the system when the information from the two modalities was combined reached 99.5%, with the acceptance threshold set to the Equal Error Rate (EER).
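The EER operating point mentioned above is the threshold where the false-reject rate on genuine comparisons equals the false-accept rate on impostor comparisons. A generic sketch of finding it from two score sets (the actual system's score distributions are not reproduced here):

```python
import numpy as np

def equal_error_rate_threshold(genuine, impostor):
    """Scan candidate thresholds and return the one where the
    false-reject rate (genuine scores below threshold) is closest to
    the false-accept rate (impostor scores at or above threshold)."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_t, best_gap = thresholds[0], np.inf
    for t in thresholds:
        frr = np.mean(genuine < t)     # genuine users rejected
        far = np.mean(impostor >= t)   # impostors accepted
        if abs(frr - far) < best_gap:
            best_t, best_gap = t, abs(frr - far)
    return best_t

gen = np.array([0.9, 0.8, 0.85, 0.95])   # toy genuine match scores
imp = np.array([0.1, 0.2, 0.3, 0.4])     # toy impostor match scores
threshold = equal_error_rate_threshold(gen, imp)
```

The TPR reported at this threshold is then simply the fraction of genuine comparisons accepted.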